# Langchain, Pinecone, and GPT with Next.js - Full Stack Starter

Embeds text files into vectors, stores them in Pinecone, and enables semantic search using GPT-3 and Langchain in a Next.js UI.

This is a basic starter project for building with the following tools and APIs:

- Next.js
- LangchainJS
- Pinecone Vector Database
- GPT-3

When I started diving into all of this, I felt that while I understood some of the individual pieces, it was hard to piece everything together into a cohesive project. I hope this project is useful for anyone looking to build with this stack who just needs something to start with.

## What we're building

We are building an app that takes text (text files), embeds it into vectors, stores them in Pinecone, and allows semantic searching of the data.

For anyone wondering what semantic search is, here is an overview (taken directly from ChatGPT-4):

> Semantic search refers to a search approach that understands the user's intent and the contextual meaning of search queries, instead of merely matching keywords. It uses natural language processing and machine learning to interpret the semantics, or meaning, behind queries. This results in more accurate and relevant search results. Semantic search can consider user intent, query context, synonym recognition, and natural language understanding. Its applications range from web search engines to personalized recommendation systems.

## Running the app

In this section I will walk you through how to deploy and run this app.

### Prerequisites

To run this app, you need the following:

- An OpenAI API key
- A Pinecone API key

### Up and running

To run the app locally, follow these steps:

1. Clone this repo:

```sh
git clone git@github.com:dabit3/semantic-search-nextjs-pinecone-langchain-chatgpt.git
```

2. Change into the directory and install the dependencies using either NPM or Yarn.

3. Copy `.example.env.local` to a new file called `.env.local` and update it with your API keys and environment.
   Be sure your environment is an actual environment given to you by Pinecone, like `us-west4-gcp-free`.

4. (Optional) Add your own custom text or markdown files into the `/documents` folder.

5. Run the app:

```sh
npm run dev
```

## Need to know

When creating the embeddings and the index, it can take up to 2-4 minutes for the index to fully initialize. There is a `setTimeout` of 180 seconds in the utils that waits for the index to be created. If initialization takes longer than that, the first attempt to create the embeddings will fail. If this happens, visit the Pinecone console and wait for your index's status to show that creation has finished, then run the function again.

## Running a query

The pre-configured app data is about the Lens Protocol developer documentation, so it will only understand questions about it unless you replace it with your own data. Here are a couple of questions you might ask it with the default data:

- What is the difference between Lens and traditional social platforms?
- What is the difference between the Lens SDK and the Lens API?
- How to query Lens data in bulk?

The base of this project was guided by this Node.js tutorial, with some restructuring, and ported over to Next.js. You can also follow them on Twitter!

## Getting your data

I recommend checking out GPT Repository Loader, which makes it simple to turn any GitHub repo into a text format, preserving the structure of the files and their contents, making it easy to chop up and save into Pinecone using my codebase.

Source: https://github.com/dabit3/semantic-search-nextjs-pinecone-langchain-chatgpt (uploaded by dabit3, 2023-05-29)
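To make the semantic-search idea above concrete: documents and queries are embedded as vectors, and results are ranked by vector similarity (commonly cosine similarity) rather than keyword overlap. Here is a minimal, self-contained sketch of that ranking step. The three-dimensional "embeddings" and the `search` helper are made up for illustration; in the real app, embeddings come from an embedding model and similarity search happens inside Pinecone.

```typescript
type Embedded = { text: string; vector: number[] };

// Cosine similarity: dot product of the vectors divided by the
// product of their magnitudes. 1 means identical direction.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Rank stored documents against a query vector, most similar first.
function search(query: number[], docs: Embedded[], topK = 2): Embedded[] {
  return [...docs]
    .sort(
      (x, y) =>
        cosineSimilarity(query, y.vector) - cosineSimilarity(query, x.vector)
    )
    .slice(0, topK);
}

// Toy corpus with hand-made vectors (a real app would embed the text).
const docs: Embedded[] = [
  { text: "Lens is a decentralized social protocol", vector: [0.9, 0.1, 0.0] },
  { text: "How to bake sourdough bread", vector: [0.0, 0.2, 0.9] },
  { text: "Querying Lens profiles with the SDK", vector: [0.8, 0.3, 0.1] },
];

// A query vector "close" to the Lens documents ranks them first,
// even though no keywords are compared.
const results = search([1, 0, 0], docs);
console.log(results.map((r) => r.text));
```

Note that the unrelated document scores near zero regardless of shared words; that is the property semantic search exploits.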
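GPT Repository Loader hands you one large text file per repo, and before embedding, long text is typically split into overlapping chunks so each vector stays within the embedding model's input limit. This standalone chunker is a simplified sketch of that step (character-based, fixed overlap); it is not the splitter this repo actually uses.

```typescript
// Split `text` into chunks of up to `chunkSize` characters, where each
// chunk repeats the last `overlap` characters of the previous one so
// that sentences cut at a boundary still appear whole in one chunk.
function chunkText(text: string, chunkSize = 1000, overlap = 200): string[] {
  if (chunkSize <= overlap) throw new Error("chunkSize must exceed overlap");
  const chunks: string[] = [];
  for (let start = 0; start < text.length; start += chunkSize - overlap) {
    chunks.push(text.slice(start, start + chunkSize));
    if (start + chunkSize >= text.length) break; // last chunk reached the end
  }
  return chunks;
}

// 2500 characters with chunkSize 1000 and overlap 200 (step 800)
// yields chunks starting at offsets 0, 800, and 1600.
const sample = "a".repeat(2500);
console.log(chunkText(sample).length);
```

Each resulting chunk would then be embedded and upserted into Pinecone as its own vector, with the chunk text stored as metadata so it can be returned at query time.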